ECE 6605 - Information Theory

Prof. Matthieu Bloch

Thursday, November 9, 2023

Announcements

  • Assignment 3
    • Posted earlier today
    • Due November 11, 2023 on Gradescope
    • Office hours today, also on Monday November 13, 2023 as needed
  • Last time
    • Gaussian channels
  • Today
    • More on Gaussian channels
    • Feedback
    • Soft-covering
  • Later this week
    • Secrecy and secret-key generation (?)

Last time

  • AWGN channels \[ Y = X + N \textsf{ where } N\sim\calN(0,\sigma^2) \] Average power constraint: \(\frac{1}{n}\sum_{i=1}^n x_i^2\leq P\)

The capacity of the AWGN channel with average power constraint \(P\) \[ C = \frac{1}{2}\log_2 \left(1+\frac{P}{\sigma^2}\right) \]
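  • Example (illustrative numbers, not from the lecture): with \(P/\sigma^2 = 15\), \[ C = \frac{1}{2}\log_2(1+15) = 2 \text{ bits/channel use}, \] and doubling the capacity to 4 bits/channel use requires \(P/\sigma^2 = 255\), i.e., roughly 12 dB more SNR.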

  • Differential entropy \[ h(X) = -\int p_X(x)\log_2 p_X(x)\,dx \]

The distribution that maximizes the differential entropy subject to a second moment constraint \(P\) is the Normal distribution with zero mean and variance \(P\)
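  • In particular, for \(X\sim\calN(0,P)\) a direct computation gives \[ h(X) = \frac{1}{2}\log_2(2\pi e P), \] which is the maximal value above and the quantity used when evaluating \(\mathbb{I}(X;Y) = h(Y) - h(N)\) for the AWGN channel.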

Water filling
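  • The notes list only the heading here; as a reminder of the mechanics, below is a minimal sketch of the water-filling power allocation over parallel Gaussian sub-channels (the function name, noise levels, and total power are illustrative choices, not values from the lecture).

```python
import numpy as np

def water_filling(noise, P, tol=1e-9):
    """Find the water level mu with sum_i max(mu - noise_i, 0) = P by bisection,
    then allocate power P_i = max(mu - noise_i, 0) to each sub-channel."""
    lo, hi = 0.0, max(noise) + P          # the water level is bracketed by these values
    while hi - lo > tol:
        mu = (lo + hi) / 2
        if np.maximum(mu - noise, 0).sum() > P:
            hi = mu                       # too much power used: lower the water level
        else:
            lo = mu                       # power budget not exhausted: raise the water level
    powers = np.maximum(mu - noise, 0)
    capacity = 0.5 * np.log2(1 + powers / noise).sum()
    return powers, capacity

# Illustrative (made-up) values: three sub-channels, total power 10
noise = np.array([1.0, 4.0, 9.0])
powers, C = water_filling(noise, P=10.0)
print(powers, C)   # noisier sub-channels receive less (possibly zero) power
```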

Channels with feedback

Soft covering

(Figure: soft covering coding model)
  • We assume for now that the message is uniformly distributed and that the input reference distribution \(P_X\) is known and fixed.

An \((n,M_n)\) code \(\calC\) for soft covering over a discrete memoryless channel \(P_{Z|X}\) consists of an encoding function \(f_n:\{1,\cdots,M_n\}\to\calX^n\)

  • The performance of an \((n,M_n)\) code is measured in terms of
    1. The rate of the code \(R\eqdef \frac{\log_2 M_n}{n}\) (bits/channel use)
    2. The approximation of the output statistics \[ S^{(n)}\eqdef \mathbb{V}(P_Z^n,P_Z^{\otimes n}) \]
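  • To make these two quantities concrete, the short Monte Carlo sketch below estimates \(S^{(n)}\) for a random codebook over a small binary channel; the channel matrix, input distribution, blocklength, and rate are made-up illustrative choices, not values from the lecture.

```python
import itertools
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative (made-up) channel P_{Z|X} and input distribution P_X
P_ZX = np.array([[0.9, 0.1],   # row x: distribution P_{Z|X}(. | x)
                 [0.2, 0.8]])
P_X = np.array([0.5, 0.5])
P_Z = P_X @ P_ZX               # target single-letter output distribution

n, R = 8, 0.8                  # blocklength and rate (illustrative choices)
M = int(2 ** (n * R))          # codebook size M_n

# Random codebook: f_n(m) drawn i.i.d. from P_X^{\otimes n}
codebook = rng.choice(2, size=(M, n), p=P_X)

# Accumulate the total variation between the induced output distribution
# P_{Z^n}(z) = (1/M) sum_m prod_i P_{Z|X}(z_i | f_n(m)_i) and the target P_Z^{\otimes n}(z)
S_n = 0.0
for z in itertools.product([0, 1], repeat=n):
    z = np.array(z)
    induced = np.mean(np.prod(P_ZX[codebook, z], axis=1))
    target = np.prod(P_Z[z])
    S_n += 0.5 * abs(induced - target)

print(f"M_n = {M}, S^(n) = {S_n:.4f}")   # S^(n) shrinks as n grows when R > I(X;Z)
```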

Channel resolvability

  • Define \[ C_{\textsf{r}}\eqdef \inf\left\{R: \exists \set{f_n}_{n\geq 1} \textsf{ with }\lim_{n\to\infty}\frac{\log M_n}{n}\leq R \text{ and } \lim_{n\to\infty}S^{(n)}=0\right\} \]

For a discrete memoryless channel characterized by \(P_{Z|X}\) and an input \(P_X\), \(C_{\textsf{r}} = \mathbb{I}(X;Z)\)
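  • For example (illustrative numbers): for a binary symmetric channel \(P_{Z|X}\) with crossover probability \(p=0.1\) and uniform input \(P_X\), \[ C_{\textsf{r}} = \mathbb{I}(X;Z) = 1 - H_b(0.1) \approx 0.531 \text{ bits/channel use}, \] so roughly half a bit of randomness per channel use suffices to simulate the i.i.d. output statistics.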

  • Remarks
    • This result is called the channel resolvability theorem
    • Given a statistical characterization of a channel and an input distribution, we can randomize over a codebook to make the output look i.i.d., i.e., to lose structure
  • Proof of the result
    • We will again use an achievability argument and a converse

Achievability (random coding)

  • Consider a generic channel \((\calU,P_{V|U},\calV)\) with message set \(\{1,\cdots,M\}\)

  • For \(\gamma>0\), let

    \[\begin{align*} \calC_\gamma \eqdef \left\{(u,v)\in\calU\times\calV:\log\frac{P_{V|U}(v|u)}{P_V(v)}\leq\gamma\right\} \end{align*}\]

  • Encoder

    • For each \({m}\in\{1,\cdots,M\}\), \(f(m)\) is a codeword \(u\) drawn independently according to \(P_U\).

For any \(\gamma>0\),

\[\begin{align*} \E[C]{S(C)} \leq \P[P_UP_{V|U}]{(U,V)\notin \calC_\gamma} + \frac{1}{2}\sqrt{\frac{2^{\gamma}}{M}}. \end{align*}\]
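  • To see how this bound gives achievability in the memoryless case (a standard specialization, with the parameter choices below made for illustration), take \(U=X^n\), \(V=Z^n\), \(P_U=P_X^{\otimes n}\), \(\gamma = n(\mathbb{I}(X;Z)+\delta)\), and \(\log_2 M = n(\mathbb{I}(X;Z)+2\delta)\) for some \(\delta>0\). Then \[ \P[P_X^{\otimes n}P_{Z|X}^{\otimes n}]{\frac{1}{n}\sum_{i=1}^n \log_2\frac{P_{Z|X}(Z_i|X_i)}{P_Z(Z_i)} > \mathbb{I}(X;Z)+\delta}\xrightarrow[n\to\infty]{} 0 \] by the law of large numbers, while \(\frac{1}{2}\sqrt{2^{\gamma}/M} = \frac{1}{2}2^{-n\delta/2}\to 0\), so that \(\E[C]{S(C)}\to 0\) and every rate \(R>\mathbb{I}(X;Z)\) is achievable.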

Converse

Consider a sequence of codes \(\set{f_n}_{n\geq 1}\) such that \(\lim_{n\to\infty}S^{(n)}=0\) and \(\lim_{n\to\infty}\frac{\log M_n}{n}\leq R\). Then \[ R\geq \min_{P:P\circ P_{Z|X}=P_Z}\mathbb{I}(X;Z) \]